Minimum Degree Reordering Algorithms: A Tutorial
Abstract
The problem of matrix inversion is central to many applications of Numerical Linear Algebra. When the matrix to invert is dense, little can be done to avoid the costly O(n³) process of Gaussian Elimination. When the matrix is symmetric, one can use the Cholesky Factorization to reduce the work of inversion (still O(n³), but with a smaller constant). When the matrix is both sparse and symmetric, we have even more options. An entire universe of approximation algorithms exists to take advantage of the structure of sparse symmetric matrices, but some applications still require computing the "true" inverse. In these cases we must once again fall back on Cholesky, but now we use a variant called Sparse Cholesky, and the amount of work it requires is no longer fixed: a slight alteration in the ordering of the equations and unknowns of our sparse, symmetric problem yields vastly different Sparse Cholesky runtimes. Why?

This is the phenomenon of fill-in, and Figure 1 shows it at work. Each step of Cholesky eliminates a row and a column from the matrix. Nonzero elements above the diagonal in the eliminated row a create new nonzero elements in every row that has a nonzero in column a. These new nonzeros are the so-called fill-in, and they spell trouble for the Sparse Cholesky algorithm in the form of more floating point operations.

Remarkably, the amount of fill-in depends on the order in which we arrange the rows and columns of the matrix. Figure 2 shows the same matrix as Figure 1 but with the first and last rows and columns interchanged. This second configuration creates no fill-in, and Sparse Cholesky finishes faster. The task now seems obvious: find the ordering of the rows and columns of the matrix that creates the least fill-in. If we could do this, then we could minimize the work to invert any sparse, symmetric matrix simply by reordering it with permutation matrices.

Bad news arrives from the world of graph theory, but to understand it we must recast the fill-in process in the language of graphs. Any symmetric matrix corresponds to an undirected graph called the elimination graph (see Figure 3). To build the elimination graph, create a vertex for every row, and for every row a that has a nonzero above the diagonal in column b, construct an edge between vertices a and b. The act of eliminating a row (and creating the resulting fill-in) with Cholesky corresponds to removing vertex a and its incident edges, then forming a clique from the former neighbors of a. The fill-in corresponds to the new edges that must be added to form this clique.
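To make the elimination-graph picture concrete, here is a minimal sketch in plain Python, written for illustration rather than taken from any ordering package. It builds the elimination graph of a symmetric matrix, simulates elimination to count the fill-in produced by a given ordering, and runs the greedy minimum degree heuristic that gives this tutorial its name. The 5x5 "arrow" matrix is a hypothetical example in the spirit of Figures 1 and 2.

from copy import deepcopy

def elimination_graph(A):
    """Vertices are rows; i and j are adjacent iff A[i][j] != 0 for i != j."""
    n = len(A)
    return {i: {j for j in range(n) if j != i and A[i][j] != 0} for i in range(n)}

def fill_in(adj, order):
    """Simulate elimination in the given order and count the fill edges created."""
    adj = deepcopy(adj)
    fill = 0
    for v in order:
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)          # v leaves the graph
        for u in nbrs:                 # former neighbors of v become a clique
            for w in nbrs:
                if u < w and w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill += 1          # each new edge is one fill-in entry above the diagonal
    return fill

def minimum_degree_order(adj):
    """Greedy heuristic: repeatedly eliminate a vertex of smallest current degree."""
    adj = deepcopy(adj)
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}       # clique among the former neighbors of v
        order.append(v)
    return order

# Hypothetical 5x5 "arrow" matrix: row/column 0 is dense, the rest is diagonal.
A = [[1, 1, 1, 1, 1],
     [1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [1, 0, 0, 0, 1]]
G = elimination_graph(A)

print(fill_in(G, [0, 1, 2, 3, 4]))    # dense row eliminated first: 6 fill edges
print(fill_in(G, [4, 3, 2, 1, 0]))    # dense row eliminated last: 0 fill edges
print(minimum_degree_order(G))        # e.g. [1, 2, 3, 4, 0]: the dense vertex is deferred

Eliminating the dense row first forces a clique on all remaining vertices, while eliminating it last creates nothing new; the greedy minimum degree rule recovers the good ordering automatically, which is the behavior the rest of the tutorial develops.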
Similar Resources
Cluster Analysis: Tutorial with R
2 Hierarchic Clustering: 2.1 Description of Classes, 2.2 Numbers of Classes, 2.3 Clustering and Ordination, 2.4 Reordering a Dendrogram, 2.5 Minimum Spanning Tree ...
Improving the Run Time and Quality of Nested Dissection Ordering
When performing sparse matrix factorization, the ordering of matrix rows and columns has a dramatic impact on the factorization time. This paper describes an approach to the reordering problem that produces significantly better orderings than prior methods. The algorithm is a hybrid of nested dissection and minimum degree ordering, and combines an assortment of different algorithmic advances. N...
Hybridizing Nested Dissection and
Minimum degree and nested dissection are the two most popular reordering schemes used to reduce fill-in and operation count when factoring and solving sparse matrices. Most of the state-of-the-art ordering packages hybridize these methods by performing incomplete nested dissection and ordering by minimum degree the subgraphs associated with the leaves of the separation tree, but most often only l...
Evaluating variable reordering strategies for SLAM
State of the art methods for state estimation and perception make use of least-squares optimization methods to perform efficient inference on noisy sensor data. Much of this efficiency is achieved by using sparse matrix factorization methods. The sparsity structure of the underlying matrix factorization which makes these optimization methods tractable is highly dependent on the choice of variab...
Hybridizing Nested Dissection and Halo Approximate Minimum Degree for Efficient Sparse Matrix Ordering
Minimum Degree and Nested Dissection are the two most popular reordering schemes used to reduce fill-in and operation count when factoring and solving sparse matrices. Most of the state-of-the-art ordering packages hybridize these methods by performing incomplete Nested Dissection and ordering by Minimum Degree the subgraphs associated with the leaves of the separation tree, but to date only loos...
Optimal Processor Mapping for Linear-Complement Communication on Hypercubes
In this paper, we address the problem of minimizing channel contention of linear-complement communication on wormhole-routed hypercubes. Our research reveals that, for traditional routing algorithms, the degree of channel contention of a linear-complement communication can be quite large. To solve this problem, we propose an alternative approach, which applies processor reordering mapping at com...